12 research outputs found

    Detecting Manipulations in Video

    This chapter presents the techniques researched and developed within InVID for the forensic analysis of videos, and the detection and localization of forgeries within User-Generated Videos (UGVs). Following an overview of state-of-the-art video tampering detection techniques, we observed that the bulk of current research is dedicated to frame-based tampering analysis or encoding-based inconsistency characterization. We built upon this existing research by designing forensic filters aimed at highlighting traces left behind by video tampering, with a focus on identifying disruptions in the temporal aspects of a video. As in many other data analysis domains, deep neural networks show very promising results in tampering detection as well. Thus, after developing a number of analysis filters that help human users highlight inconsistencies in video content, we proceeded to develop a deep learning approach that analyzes the outputs of these forensic filters and automatically detects tampered videos. In this chapter, we present our survey of the state of the art with respect to the goals of InVID, the forensic filters we developed and their potential role in localizing video forgeries, and our deep learning approach for automatic tampering detection. We present and analyze experimental results on benchmark and real-world data. The proposed method yields promising results compared to the state of the art, especially with respect to its ability to generalize to unknown data taken from the real world. We conclude with the research directions that our work in InVID has opened for the future.
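The abstract describes filters that expose disruptions in a video's temporal behavior. A minimal, stdlib-only sketch of that idea, flagging frames whose change relative to the previous frame is a statistical outlier, might look as follows; the function names and the z-score test are illustrative assumptions, not InVID's actual filters:

```python
# Minimal sketch of a temporal-inconsistency filter: flag frames whose
# mean absolute difference from the previous frame deviates sharply
# from the clip's typical frame-to-frame change.
from statistics import mean, stdev

def frame_difference(a, b):
    """Mean absolute pixel difference between two equally sized frames."""
    return mean(abs(x - y) for x, y in zip(a, b))

def flag_temporal_anomalies(frames, z_thresh=2.0):
    """Return indices of frames whose change score is a z-score outlier."""
    diffs = [frame_difference(frames[i - 1], frames[i])
             for i in range(1, len(frames))]
    mu, sigma = mean(diffs), stdev(diffs)
    if sigma == 0:
        return []  # perfectly uniform motion, nothing to flag
    return [i + 1 for i, d in enumerate(diffs) if (d - mu) / sigma > z_thresh]
```

A spliced-in frame produces two large differences (entering and leaving the splice), so both boundaries of the disruption are flagged.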

    Computational methods for the detection of facial palsy

    We are developing a telemedicine application which offers automated diagnosis of facial (Bell's) palsy through a Web service. We used a test data set of 43 images of facial palsy patients and 44 normal subjects to develop the automatic recognition algorithm. Three different image pre-processing methods were used. A machine learning technique (the support vector machine, SVM) was used to examine the difference between the two halves of the face: if there was a sufficient difference, the SVM recognized facial palsy; otherwise, if the halves were roughly symmetrical, the SVM classified the image as normal. The facial palsy images had a greater Hamming distance than the normal images, indicating greater asymmetry. The median distance in the normal group was 331 (interquartile range 277-435) and the median distance in the facial palsy group was 509 (interquartile range 334-703). This difference was significant.
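The asymmetry measure in this abstract, a Hamming distance between the two halves of the face, can be sketched as follows; the binarization threshold and image representation are assumptions for illustration, not the paper's actual pre-processing:

```python
# Illustrative sketch: binarize the two halves of a grayscale face image,
# mirror the right half, and count differing pixels (Hamming distance).
# A symmetric face yields a small distance; facial palsy yields a larger one.

def hamming_asymmetry(image, threshold=128):
    """image: list of rows of grayscale values (0-255) with even width.
    Returns the Hamming distance between the left half and the
    horizontally mirrored right half after binarization."""
    distance = 0
    for row in image:
        half = len(row) // 2
        left = [1 if v >= threshold else 0 for v in row[:half]]
        right = [1 if v >= threshold else 0 for v in row[half:]]
        right_mirrored = right[::-1]  # mirror so columns align across the midline
        distance += sum(l != r for l, r in zip(left, right_mirrored))
    return distance
```

On a perfectly symmetric row such as `[200, 0, 0, 200]` the distance is 0; breaking the symmetry raises it, which is the signal the SVM would then threshold.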

    AgroAId: A Mobile App System for Visual Classification of Plant Species and Diseases Using Deep Learning and TensorFlow Lite

    This paper aims to assist novice gardeners in identifying plant diseases to circumvent misdiagnosing their plants and to increase general horticultural knowledge for better plant growth. In this paper, we develop a mobile plant care support system (“AgroAId”), which incorporates computer vision technology to classify a plant’s [species–disease] combination from an input plant leaf image, recognizing 39 [species-and-disease] classes. Our method comprises a comparative analysis to maximize our multi-label classification model’s performance and determine the effects of varying the convolutional neural network (CNN) architectures, transfer learning approach, and hyperparameter optimizations. We tested four lightweight, mobile-optimized CNNs—MobileNet, MobileNetV2, NasNetMobile, and EfficientNetB0—under four transfer learning scenarios (percentage of frozen-vs.-retrained base layers): (1) freezing all convolutional layers; (2) freezing 80% of layers; (3) freezing 50% of layers; and (4) retraining all layers. A total of 32 model variations are built and assessed using standard metrics (accuracy, F1-score, confusion matrices). The most lightweight, high-accuracy model was found to be an EfficientNetB0 model using a fully retrained base network with optimized hyperparameters, achieving 99% accuracy and demonstrating the efficacy of the proposed approach; it is integrated into our plant care support system in a TensorFlow Lite format alongside the front-end mobile application and centralized cloud database. Finally, our system also uses the collective user classification data to generate spatiotemporal analytics about regional and seasonal disease trends, making these analytics accessible to all system users to increase awareness of global agricultural trends.
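The four freezing scenarios in this comparative analysis can be sketched framework-agnostically as follows; the layer count and the mapping to per-layer trainable flags are illustrative assumptions (in Keras this would correspond to setting each base layer's `trainable` attribute):

```python
# Sketch of the paper's four transfer learning scenarios: freeze the first
# fraction of base layers (keep pretrained weights fixed) and retrain the rest.

def trainable_flags(num_layers, freeze_fraction):
    """Return a per-layer list: False = frozen, True = retrained."""
    frozen = int(num_layers * freeze_fraction)
    return [i >= frozen for i in range(num_layers)]

# The four scenarios, for a hypothetical 10-layer base network:
scenarios = {name: trainable_flags(10, f)
             for name, f in [("freeze_all", 1.0), ("freeze_80", 0.8),
                             ("freeze_50", 0.5), ("retrain_all", 0.0)]}
```

Crossing these four scenarios with the four architectures (plus hyperparameter variants) is what yields the grid of model variations the paper evaluates; "retrain_all" is the scenario the best EfficientNetB0 model used.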


    IoT Based Smart City Bus Stops

    The advent of smart sensors, single system-on-chip computing devices, the Internet of Things (IoT), and cloud computing is facilitating the design and development of smart devices and services. These include smart meters, smart street lighting, smart gas stations, smart parking lots, and smart bus stops. Countries in the Gulf region have hot and humid weather for around 6–7 months of the year, which can lead to uncomfortable conditions for public commuters. Transportation authorities have made some major enhancements to existing bus stops by installing air-conditioning units, but without any remote monitoring and control features. This paper proposes a smart, IoT-based, environmentally friendly enhanced design for existing bus stop services in the United Arab Emirates. The objective of the proposed design is to optimize energy consumption by estimating bus stop occupancy, remotely monitoring air conditioning and lights, automatically reporting utility breakdowns, and measuring air pollution around the area. To accomplish this, bus stops are equipped with a WiFi-based standalone microcontroller connected to sensors and actuators. The microcontroller transmits the sensor readings to a real-time database hosted in the cloud and works with a mobile app that notifies operators or maintenance personnel in the case of abnormal readings or breakdowns. The mobile app includes a map interface enabling operators to remotely monitor bus stop conditions such as temperature, humidity, estimated occupancy, and air pollution levels. In addition to presenting the system’s architecture and detailed design, a system prototype is built to test and validate the proposed solution.
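The abnormal-reading notification logic described here can be sketched as a simple threshold check over the readings the microcontroller uploads; the sensor names and bounds below are illustrative assumptions, not the system's actual configuration:

```python
# Minimal sketch: compare each uploaded sensor reading against configured
# bounds and produce alert messages for the operators' mobile app.

THRESHOLDS = {
    "temperature_c": (16.0, 28.0),   # AC should keep the stop in this band
    "humidity_pct": (20.0, 70.0),
    "co2_ppm": (0.0, 1000.0),
}

def check_readings(readings):
    """Return a list of alert strings for out-of-range sensor values."""
    alerts = []
    for sensor, value in readings.items():
        low, high = THRESHOLDS.get(sensor, (float("-inf"), float("inf")))
        if not (low <= value <= high):
            alerts.append(f"{sensor}: {value} outside [{low}, {high}]")
    return alerts
```

In the proposed architecture this check would run against the cloud-hosted real-time database, with each non-empty alert list triggering a push notification to maintenance personnel.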